Podcast Transcript: Are reasoning models fundamentally flawed?


This automatically generated transcript is taken from the IT Pro Podcast episode "Are Reasoning Models Fundamentally Flawed?". We apologize for any errors.

Rory Bathgate

AI reasoning models have emerged in the past year as a beacon of hope for LLMs, with AI developers such as OpenAI, Google, and Anthropic selling them as the go-to solution for solving the most complex business problems. The models are intended to show their working out, improving model transparency and the detail of their answers, and they have been held up as the future of AI models. They're sold at a higher price per use than standard LLMs, and most frontier models now come with at least an option for reasoning. But a new research paper by Apple has cast significant doubt on the efficacy of reasoning models, going as far as to suggest that when a problem is too complex, they simply give up. What's going on here? And does it mean reasoning models are fundamentally flawed? You're listening to the IT Pro Podcast.

